Periodic Finite State Controllers for Efficient POMDP and DEC-POMDP Planning

Authors

  • Joni Pajarinen
  • Jaakko Peltonen
Abstract

Applications such as robot control and wireless communication require planning under uncertainty. Partially observable Markov decision processes (POMDPs) plan policies for single agents under uncertainty, and their decentralized versions (DEC-POMDPs) find a policy for multiple agents. The policy in infinite-horizon POMDP and DEC-POMDP problems has been represented as finite state controllers (FSCs). We introduce a novel class of periodic FSCs, composed of layers connected only to the previous and next layer. Our periodic FSC method finds a deterministic finite-horizon policy and converts it to an initial periodic infinite-horizon policy. This policy is optimized by a new infinite-horizon algorithm to yield deterministic periodic policies, and by a new expectation maximization algorithm to yield stochastic periodic policies. Our method yields better results than earlier planning methods and can compute larger solutions than regular FSCs.
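The layered structure described in the abstract can be illustrated with a small sketch: controller nodes are grouped into layers, and after each action-observation step execution moves to the next layer, wrapping around after the last one. The class and rule names below are hypothetical illustrations of the idea, not the authors' implementation.

```python
class PeriodicFSC:
    """A minimal periodic finite state controller sketch (illustrative only).

    action_rules[layer][node] -> action to take at that node.
    transition_rules[layer][(node, observation)] -> successor node,
    which by construction lives in the *next* layer.
    """

    def __init__(self, action_rules, transition_rules):
        self.action_rules = action_rules
        self.transition_rules = transition_rules
        self.period = len(action_rules)  # number of layers
        self.layer = 0                   # current layer index
        self.node = 0                    # current node within the layer

    def act(self):
        # Deterministic action choice at the current node.
        return self.action_rules[self.layer][self.node]

    def observe(self, observation):
        # Transitions only ever go to the next layer (wrapping around),
        # which is what makes the controller periodic.
        self.node = self.transition_rules[self.layer][(self.node, observation)]
        self.layer = (self.layer + 1) % self.period

# Tiny example: period 2, one node per layer, actions alternate each step.
fsc = PeriodicFSC(
    action_rules=[{0: "listen"}, {0: "open"}],
    transition_rules=[{(0, "obs"): 0}, {(0, "obs"): 0}],
)
print(fsc.act())   # layer 0 action: "listen"
fsc.observe("obs")
print(fsc.act())   # layer 1 action: "open"
```

Because each layer only feeds the next, optimizing the controller layer by layer (as the paper does with its deterministic and expectation-maximization algorithms) operates on much smaller per-layer structures than a fully connected FSC of the same total size.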


Similar Articles

Planning under uncertainty for large-scale problems with applications to wireless networking ; Päätöksenteko epävarmuuden vallitessa suurissa ongelmissa ja sovelluksia langattomaan tiedonsiirtoon

Aalto University, P.O. Box 11000, FI-00076 Aalto, www.aalto.fi. Author: Joni Pajarinen. Name of the doctoral dissertation: Planning under uncertainty for large-scale problems with applications to wireless networking. Publisher: School of Science. Unit: Department of Information and Computer Science. Series: Aalto University publication series DOCTORAL DISSERTATIONS 20/2013. Field of research: Computer and I...


Dual Formulations for Optimizing Dec-POMDP Controllers

Decentralized POMDP is an expressive model for multiagent planning. Finite-state controllers (FSCs)—often used to represent policies for infinite-horizon problems—offer a compact, simple-to-execute policy representation. We exploit novel connections between optimizing decentralized FSCs and the dual linear program for MDPs. Consequently, we describe a dual mixed integer linear program (MIP) for...


Solving Multi-agent Decision Problems Modeled as Dec-POMDP: A Robot Soccer Case Study

Robot soccer is one of the major domains for studying the coordination of multi-robot teams. Decentralized Partially Observable Markov Decision Process (Dec-POMDP) is a recent mathematical framework which has been used to model multi-agent coordination. In this work, we model simple robot soccer as Dec-POMDP and solve it using an algorithm which is based on the approach detailed in [1]. This al...


Optimally Solving Dec-POMDPs as Continuous-State MDPs

Decentralized partially observable Markov decision processes (Dec-POMDPs) provide a general model for decision-making under uncertainty in decentralized settings, but are difficult to solve optimally (NEXP-Complete). As a new way of solving these problems, we introduce the idea of transforming a Dec-POMDP into a continuous-state deterministic MDP with a piecewise-linear and convex value function...


Supervisor Synthesis of POMDP based on Automata Learning

As a general and thus popular model for autonomous systems, the partially observable Markov decision process (POMDP) can capture uncertainties from different sources such as sensing noise, actuation errors, and uncertain environments. However, its comprehensiveness makes planning and control in POMDPs difficult. Traditional POMDP planning problems aim to find the optimal policy to maximize the ...



Journal:

Volume   Issue

Pages  -

Publication date: 2011